
    Scale-Free Phenomena in Communication Networks: A Cross-Atlantic Comparison

    "Small-world networks" have a high degree of local clustering, or cliqueness, like a regular lattice, and a relatively short average minimum path, like a completely random network. The huge appeal of "small-world networks" lies in the impact they are said to have on dynamical systems. In a transportation network, "small-world" topology could improve the flow of people or goods through the network, which has important implications for the design of such networks. Preliminary research has shown that the "small-world" phenomenon can arise in traffic networks possessing "small-world" topology (i.e., in a network whose structure lies somewhere between a regular lattice and a random graph) and that, at least under certain circumstances, traffic appears to flow more efficiently through a network with such topology (Schintler and Kulkarni, 2000). This paper will explore this further through simulation under varying assumptions regarding the size of the network (i.e., the number of nodes and edges), the level of traffic in the network, the uniformity of nodes and edges, and the information levels of travelers in the network. The simulations will use the random rewiring process introduced by Watts and Strogatz (1998); each time the network is rewired, the distribution of traffic and congestion through the network and the "small-world" network parameters, the shortest average minimum path and the clustering coefficient, will be examined. Traffic flow will be estimated using a gravity model framework and a route choice optimization program. The simulations will also be used to reveal whether certain nodes or links suffer at the expense of the entire network becoming more efficient. In addition, the possibility of a self-organised criticality (SOC) structure will be examined. The concept, introduced by Bak et al. (1987), has gained a great deal of attention in past decades for its capability to explain significant structural transformations of a dynamic system. SOC sets out how prominent exogenous forces, together with strong localized interactions at the micro level, lead a system to a critical state at the macro level. A further step in our analysis is the investigation of whether a power-law distribution, characteristic of the SOC state, evolves in the traffic network. While "small-world" network topology may be shown to improve the efficiency of traffic flow through a network, it should be recognized that "small-world" networks are sparse by nature. The shutdown or major disruption of any link in such a network, particularly one with heavy congestion, could provoke significant disorder. This paper will also explore the effect that disruptions of this nature have on networks designed with a high degree of local clustering and a short average minimum path. The fact that a "small-world" network is sparse also raises other issues for the transportation planner. If "small-world" topology is in fact a desirable property for transportation networks, how do we transform existing networks to produce these results? Unlike other networks, such as those for telecommunications or socialization, a transportation network cannot simply be rewired to achieve a more efficient structure. This issue will also be addressed in the paper.
    REFERENCES
    Bak, P., C. Tang, and K. Wiesenfeld (1987). "Self-Organised Criticality", Physical Review Letters, Vol. 59 (4), pp. 381-384.
    Watts, D.J. and S.H. Strogatz (1998). "Collective Dynamics of 'Small-World' Networks", Nature, Vol. 393, pp. 440-442.
    Schintler, L.A. and R. Kulkarni (2000). "The Emergence of Small-World Phenomenon in Urban Transportation Networks", in Reggiani, A. (ed.), Spatial Economic Science: New Frontiers in Theory and Methodology, Springer-Verlag, Berlin-New York, pp. 419-434.
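    The rewiring experiment summarized above can be prototyped with standard graph tooling. The following is a minimal sketch, not the authors' implementation, assuming Python with the networkx library: it generates Watts-Strogatz networks at increasing rewiring probabilities and tracks the two "small-world" parameters named in the abstract, the clustering coefficient and the average shortest path. The gravity-model traffic assignment and route choice optimization are not reproduced here.

```python
# Minimal sketch of the Watts-Strogatz rewiring experiment described above.
# Assumes the networkx library; network size and neighbourhood are arbitrary
# illustrative values, and the traffic assignment step is omitted.
import networkx as nx

N, K = 200, 6  # number of nodes, neighbours per node in the starting ring lattice

for p in [0.0, 0.01, 0.05, 0.1, 0.5, 1.0]:     # rewiring probabilities
    G = nx.watts_strogatz_graph(N, K, p, seed=42)
    if not nx.is_connected(G):                  # rewiring can disconnect the graph
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    C = nx.average_clustering(G)                # local clustering ("cliqueness")
    L = nx.average_shortest_path_length(G)      # average minimum path
    print(f"p={p:<5} clustering={C:.3f} avg_path={L:.3f}")
```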

    The Analysis of Big Data on Cities and Regions - Some Computational and Statistical Challenges

    Big Data on cities and regions bring new opportunities and challenges to data analysts and city planners. On the one hand, they hold great promise to combine increasingly detailed data for each citizen with critical infrastructures to plan, govern and manage cities and regions, improve their sustainability, optimize processes and maximize the provision of public and private services. On the other hand, the massive sample size and high dimensionality of Big Data, together with their geo-temporal character, introduce unique computational and statistical challenges. This chapter provides an overview of the salient characteristics of Big Data and of how these features drive a paradigm change in data management and analysis, as well as in the computing environment. Series: Working Papers in Regional Science

    Big Data and Regional Science: Opportunities, Challenges, and Directions for Future Research

    Recent technological, social, and economic trends and transformations are contributing to the production of what is usually referred to as Big Data. Big Data, which is typically defined by four dimensions -- Volume, Velocity, Veracity, and Variety -- changes the methods and tactics for using, analyzing, and interpreting data, requiring new approaches for data provenance, data processing, data analysis and modeling, and knowledge representation. The use and analysis of Big Data involves several distinct stages, from "data acquisition and recording" through "information extraction" and "data integration" to "data modeling and analysis" and "interpretation", each of which introduces challenges that need to be addressed. There are also cross-cutting challenges that underlie many, and sometimes all, of the stages of the data analysis pipeline; these relate to "heterogeneity", "uncertainty", "scale", "timeliness", "privacy" and "human interaction". Using the Big Data analysis pipeline as a guiding framework, this paper examines the challenges arising in the use of Big Data in regional science. The paper concludes with some suggestions for future activities to realize the possibilities and potential of Big Data in regional science. Series: Working Papers in Regional Science

    A Critical Examination of the Ethics of AI-Mediated Peer Review

    Recent advancements in artificial intelligence (AI) systems, including large language models like ChatGPT, offer promise and peril for scholarly peer review. On the one hand, AI can enhance efficiency by addressing issues like long publication delays. On the other hand, it brings ethical and social concerns that could compromise the integrity of the peer review process and its outcomes. However, human peer review systems are also fraught with related problems, such as biases, abuses, and a lack of transparency, which already diminish credibility. While there is increasing attention to the use of AI in peer review, discussions revolve mainly around plagiarism and authorship in academic journal publishing, ignoring the broader epistemic, social, cultural, and societal context in which peer review is positioned. The legitimacy of AI-driven peer review hinges on its alignment with the scientific ethos, encompassing the moral and epistemic norms that define appropriate conduct in the scholarly community. In this regard, there is a "norm-counternorm continuum," where the acceptability of AI in peer review is shaped by institutional logics, ethical practices, and internal regulatory mechanisms. The discussion here emphasizes the need to critically assess the legitimacy of AI-driven peer review, addressing its benefits and downsides relative to the broader epistemic, social, ethical, and regulatory factors that sculpt its implementation and impact. Comment: 21 pages, 1 figure

    A New Method for Assessing the Resiliency of Large, Complex Networks

    Designing resilient and reliable networks is a principal concern of planners and private firms. Traffic congestion, whether recurring or the result of some aperiodic event, is extremely costly. This paper describes an alternative process and model for analyzing the resiliency of networks that addresses some of the shortcomings of more traditional approaches, e.g., the four-step modeling process used in transportation planning. It should be noted that the authors do not view this as a replacement for current approaches but rather as a complementary tool designed to augment analysis capabilities. The process described in this paper for analyzing the resiliency of a network involves at least three steps: (1) assessment, i.e., identification of important nodes and links according to different criteria; (2) verification of critical nodes and links based on failure simulations; and (3) consequence analysis. Raster analysis, graph-theory principles and GIS are used to develop a model for carrying out each of these steps. The methods are demonstrated using two large, interdependent networks for a metropolitan area in the United States.
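    The first two steps of the process described above can be illustrated with a small, hypothetical sketch, assuming Python with networkx; the raster and GIS-based consequence analysis used in the paper is not reproduced. The snippet ranks links by edge betweenness centrality as one possible importance criterion, then simulates the failure of each top-ranked link and measures the drop in global efficiency.

```python
# Hypothetical sketch of steps 1-2 of the resiliency process described above:
# (1) identify important links, (2) verify criticality via failure simulation.
# The network and all parameter values are illustrative assumptions.
import networkx as nx

G = nx.random_geometric_graph(100, 0.18, seed=1)       # stand-in road network

# Step 1: rank links by edge betweenness centrality (one possible criterion).
ranking = nx.edge_betweenness_centrality(G)
critical_edges = sorted(ranking, key=ranking.get, reverse=True)[:5]

# Step 2: simulate the failure of each candidate link and measure the impact
# on global efficiency (an aggregate accessibility proxy).
base_eff = nx.global_efficiency(G)
for u, v in critical_edges:
    H = G.copy()
    H.remove_edge(u, v)
    loss = base_eff - nx.global_efficiency(H)
    print(f"link ({u},{v}): efficiency loss {loss:.4f}")
```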

    Managing pavement in a busy urban highway network

    In this dissertation, two dynamic transportation models are formulated and analyzed. When employed in optimal control exercises, these models determine, for a given metropolitan highway, a set of optimal transportation policies, specifically relating to the problem of pavement management. The first model characterizes, over some medium-range planning horizon, changes in peak-period traffic volumes, pavement conditions, vehicle miles traveled (VMT), and volatile organic compound (VOC) emissions. When used in optimal control exercises, this model determines the sequence and timing of maintenance, lane expansion, and transportation demand management measures that will achieve desired peak-period traffic levels, reduce VMT and VOC emissions to levels mandated by the Clean Air Act amendments, and maintain pavement conditions at acceptable design standards. The second model determines, for some medium-range planning horizon, the sequence and timing of highway reconstruction, given some predetermined program of maintenance expenditures, lane expansion, and transportation demand management measures, that minimizes agency costs and system travel times. Numerical simulations of both models are performed, and for the latter, a sensitivity analysis is conducted. U of I Only: ETDs are only available to UIUC users without author permission.
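    As a rough illustration of the kind of timing problem the abstract describes, the following sketch is a hypothetical, heavily simplified stand-in, not the dissertation's models: it enumerates resurfacing plans for a single pavement segment over a short horizon and picks the plan that minimizes agency cost plus a user-cost penalty that grows as the pavement deteriorates. All dynamics and parameter values are illustrative assumptions.

```python
# Hypothetical toy version of a pavement-management timing problem:
# choose in which years to resurface one segment over a medium-range horizon
# so as to minimize agency cost plus user cost. Values are illustrative only.
from itertools import product

HORIZON = 10          # planning years
DECAY = 8.0           # annual loss in condition index without maintenance
RESET = 90.0          # condition index right after resurfacing
MAINT_COST = 50.0     # agency cost of one resurfacing
USER_COST = 1.5       # user cost per point of condition shortfall per year

def total_cost(plan):
    """Cost of a maintenance plan: a tuple of 0/1 flags, one per year."""
    condition, cost = RESET, 0.0
    for do_maintenance in plan:
        if do_maintenance:
            condition, cost = RESET, cost + MAINT_COST
        cost += USER_COST * (RESET - condition)   # penalty for poor pavement
        condition = max(condition - DECAY, 0.0)   # simple deterioration dynamics
    return cost

best = min(product((0, 1), repeat=HORIZON), key=total_cost)
print("best maintenance years:", [y for y, m in enumerate(best) if m],
      "cost:", round(total_cost(best), 1))
```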